Philosophy Dictionary of Arguments


 
Empiricism: a branch of epistemology which assumes that sensory perception is fundamental for setting up claims and theories. The opposite position, rationalism, assumes that even purely logical knowledge and conclusions from this knowledge may be sufficient for building theories. See also logical positivism, instrumentalism, rationalism, epistemology, theories, foundation, experiments, inferentialism, knowledge, experience, science.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Economic Theories on Empiricism - Dictionary of Arguments

Parisi I 30
Empiricism/Economics theories/Gelbach/Klick: The central problem in much empirical work is omitted variables bias.
a) Sometimes this problem can be solved by controlling for more covariates—if the problem is omission, then inclusion should be a good solution. But this solution is often not feasible, because many omitted variables will be unknown to the researcher, and still others that theory suggests should be included will be unavailable or unquantifiable. Despite these issues, simply adding more control variables was standard operating procedure in empirical law and economics before the mid-1990s.
b) Another approach was to admit the existence of the bias but to assert that it is necessarily in a given direction, or to speculate about its probable magnitude. But if there are multiple omitted variables, this approach is more problematic, because the sign and magnitude of the bias from excluding them then depends on the relationship between the policy variable of interest and all the omitted variables, as well as on the signs and magnitudes of the coefficients on those omitted variables.*
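The problem described in (a) and (b) can be illustrated with a small simulation (hypothetical data; all coefficient values are illustrative, not from the chapter): the bias in the short regression combines one term per omitted variable, so its sign is hard to predict a priori, while controlling for the omitted covariates removes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Two omitted variables, correlated with each other and with the policy variable.
z1 = rng.normal(size=n)
z2 = 0.5 * z1 + rng.normal(size=n)
x = 0.8 * z1 - 0.6 * z2 + rng.normal(size=n)   # "policy" variable of interest
y = 1.0 * x + 2.0 * z1 + 1.5 * z2 + rng.normal(size=n)  # true effect of x is 1.0

def ols_slopes(y, *cols):
    """OLS coefficients of y on an intercept plus the given regressors."""
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols_slopes(y, x)[1]         # omits z1 and z2 -> biased
b_long = ols_slopes(y, x, z1, z2)[1]  # controls for both -> approximately 1.0
```

Here the two omitted variables pull the bias in opposite directions, so neither the sign nor the size of `b_short - 1.0` is obvious without knowing all the cross-correlations; in real data those are exactly the quantities the researcher does not observe.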
Randomized controlled experiments: In the mid-1990s, many empirical micro-economists began to shift focus to research designs they motivated in terms linked to the method of randomized controlled experiments. Omitted variable bias is not a concern in such experiments since the “treatment” is assigned randomly, so that assignment is statistically independent of any otherwise important omitted variables. In a random assignment experiment, average treatment effects can then be measured simply, using the average change in the outcome of interest for the experimental treatment group, minus the average change in the experimental control group.
Parisi I 30 FN
Estimation: (...) that average effects are not the only type of treatment effects that can be estimated. For examples of studies that consider distributional effects, see Heckman, Smith, and Clements (1997)(3) and Bitler, Gelbach, and Hoynes (2006)(4).
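The random-assignment logic described above can be sketched numerically (a hypothetical simulation; the effect size 0.5 and the confounder are illustrative): because treatment is assigned independently of the unobserved variable, the simple difference in group means recovers the average treatment effect without any controls.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.normal(size=n)                      # unobserved confounder
d = rng.integers(0, 2, size=n)              # randomized treatment assignment
y = 0.5 * d + 2.0 * z + rng.normal(size=n)  # true average treatment effect: 0.5

# Difference in means: no covariates needed, since d is independent of z.
ate_hat = y[d == 1].mean() - y[d == 0].mean()
```

Randomization makes `z` irrelevant to the estimate on average, which is exactly why omitted variables bias is not a concern in such designs.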
Parisi I 31
Randomized controlled experiments: Empirical law and economics embraced this approach, implementing so-called difference-in-differences research designs to examine a host of legal changes. In general, this approach compares the change in outcomes in jurisdictions adopting a given policy with any contemporaneous change in non-adopting jurisdictions.
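A minimal difference-in-differences sketch of this comparison (hypothetical jurisdiction and period effects; the policy effect 0.3 is illustrative): differencing the adopting jurisdiction's change against the non-adopting jurisdiction's change removes both the jurisdiction fixed effects and the common time trend, leaving the policy effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000  # observations per jurisdiction-period cell
effect = 0.3  # true policy effect

def cell(juris_fe, period_fe, treated):
    """Simulated outcomes: fixed effect + time trend + policy effect + noise."""
    return juris_fe + period_fe + effect * treated + rng.normal(0.0, 1.0, n)

y_t1 = cell(1.0, 0.0, 0)   # adopting jurisdiction, before
y_t2 = cell(1.0, 0.5, 1)   # adopting jurisdiction, after (policy in force)
y_c1 = cell(-0.5, 0.0, 0)  # non-adopting jurisdiction, before
y_c2 = cell(-0.5, 0.5, 0)  # non-adopting jurisdiction, after

did = (y_t2.mean() - y_t1.mean()) - (y_c2.mean() - y_c1.mean())
```

The estimator is valid here only because both jurisdictions share the same time trend; that "parallel trends" requirement is the untestable comparability assumption discussed below.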
Policy changes: Some studies bearing the “natural experiments” moniker (...) use instrumental variables to purge their estimates of endogenous policy choice. A valid instrumental variable in this context is one that is correlated with the adoption of a policy change, but not otherwise correlated with the outcome of interest. The first requirement is easy
Parisi I 32
to demonstrate empirically, if it holds. But the second requirement, which is an “exactly identifying assumption,” cannot be tested and therefore is adopted only because it appears reasonable in context; intuition may be the only real guide to whether the second condition holds.
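The instrumental-variables logic can be sketched with simulated data (all values hypothetical): the instrument moves policy adoption but affects the outcome only through adoption, so the ratio cov(z, y)/cov(z, d) recovers the true effect, while OLS is biased by the endogenous adoption decision.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
z_inst = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                      # unobserved confounder
d = 0.7 * z_inst + u + rng.normal(size=n)   # endogenous policy adoption
y = 1.0 * d + 2.0 * u + rng.normal(size=n)  # true effect of d is 1.0

# OLS is biased because d is correlated with the unobserved u.
b_ols = np.cov(d, y)[0, 1] / np.var(d, ddof=1)
# IV estimate (single instrument, Wald form): cov(z, y) / cov(z, d).
b_iv = np.cov(z_inst, y)[0, 1] / np.cov(z_inst, d)[0, 1]
```

Note that the simulation builds in the untestable exclusion restriction by construction (`z_inst` enters `y` only through `d`); with real data, that assumption is exactly what cannot be verified empirically.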
Causality: Obtaining causal estimates from non-experimental data always requires a judgment that omitted variables bias can be eliminated, so that treatment and comparison jurisdictions can be made comparable. This might be done by adding covariates, by using difference in differences, by using instrumental variables, or by using some other approach (...).
Experiments/generalization: (...) perhaps the most important limitation on the usefulness of natural-experiments-motivated work involves the degree of generalizability, or “external validity.” The most plausibly exogenous natural experiments may be the ones in which the “shocks” inducing identifying variation are the most limited in terms of what they can tell us about the effects of policy change in other settings. That is, precisely the oddity that gives rise to the shock may make the effects we can estimate from the shock least relevant to other circumstances of interest. This problem has contributed both to Angus Deaton’s criticism of the natural experiment methodology (2010)(5) and to other authors’ arguments in favor of structural econometric methods to generate estimates that can be more policy relevant than those provided by quasi-experimental methods (see, e.g., Nevo and Whinston, 2010(6); Heckman and Urzúa, 2010(7); for a response, see Imbens, 2010(8)) (ImbensVsHeckman).
Internal validity: Even regarding internal validity, the credibility of a quasi-experimental research design depends crucially on untestable assumptions concerning which treatment and comparison groups are sufficiently comparable (...) (see, e.g., Abadie, Diamond, and Hainmueller, 2010(9); more generally, see Rosenbaum, 2010(10)).
Natural experiments: Some natural experiment designs also generate problems with respect to statistical inference, to the degree that the policy shocks are sticky over time, necessitating careful attention to hypothesis testing and covariance estimation (Bertrand, Duflo, and Mullainathan, 2004(11); Cameron, Gelbach, and Miller, 2008(12), 2011(13)).
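The inference problem from sticky, cluster-level shocks can be illustrated with a toy example (hypothetical cluster counts and variances): when observations within a jurisdiction share a common shock, the naive i.i.d. standard error of a mean badly understates sampling variability, while a cluster-robust formula based on cluster sums does not.

```python
import numpy as np

rng = np.random.default_rng(4)
G, m = 50, 200                        # 50 clusters (e.g. states), 200 obs each
n = G * m
cluster_shock = rng.normal(0.0, 1.0, G)        # shared shock per cluster
y = np.repeat(cluster_shock, m) + rng.normal(0.0, 1.0, n)

# Naive standard error of the mean, ignoring within-cluster correlation.
se_naive = y.std(ddof=1) / np.sqrt(n)

# Cluster-robust SE of the mean: treat cluster sums as the independent units.
cluster_sums = y.reshape(G, m).sum(axis=1)
se_cluster = np.sqrt(cluster_sums.var(ddof=1) * G / n**2)
```

Because the shared shock moves all `m` observations in a cluster together, `se_cluster` comes out many times larger than `se_naive` here; hypothesis tests using the naive formula would reject far too often, which is the concern raised by Bertrand, Duflo, and Mullainathan (2004)(11).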

*On omitted variables bias with multiple omitted variables, see Greene (2012)(1); for an approach to the omitted variables bias formula that views the bias in terms of the joint heterogeneity due to all omitted variables simultaneously, see Gelbach (2016)(2).

1. Greene, William H. (2012). Econometric Analysis. 7th edition. Upper Saddle River, NJ: Prentice Hall.
2. Gelbach, Jonah B. (2016). “When Do Covariates Matter? And Which Ones, and How Much?” Journal of Labor Economics 34: 509–543.
3. Heckman, James J., Jeffrey Smith, and Nancy Clements (1997). “Making the Most Out of Programme Evaluations and Social Experiments: Accounting for Heterogeneity in Programme Impacts.” Review of Economic Studies 64(4): 487–535.
4. Bitler, Marianne P., Jonah B. Gelbach, and Hilary W. Hoynes (2006). “What Mean Impacts Miss: Distributional Effects of Welfare Reform Experiments.” American Economic Review 96(4): 988–1012.
5. Deaton, Angus (2010). “Instruments, Randomization, and Learning about Development.” Journal of Economic Literature 48(2): 424–455.
6. Nevo, Aviv and Michael D. Whinston (2010). “Taking the Dogma Out of Econometrics: Structural Modeling and Credible Inference.” Journal of Economic Perspectives 24(2): 69–82.
7. Heckman, James J. and Sergio Urzúa (2010). “Comparing IV with Structural Models: What Simple IV Can and Cannot Identify.” Journal of Econometrics 156(1): 27–37.
8. Imbens, Guido W. (2010). “Better LATE than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua.” Journal of Economic Literature 48(2): 399–423.
9. Abadie, Alberto, Alexis Diamond, and Jens Hainmueller (2010). “Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program.” Journal of the American Statistical Association 105(490): 493–505.
10. Rosenbaum, Paul R. (2010). Observational Studies (Springer Series in Statistics). 2nd edition. New York: Springer-Verlag.
11. Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan (2004). “How Much Should We Trust Differences-in-Differences Estimates?” Quarterly Journal of Economics 119(1): 249–275.
12. Cameron, A. Colin, Jonah B. Gelbach, and Douglas L. Miller (2008). “Bootstrap-Based Improvements for Inference with Clustered Errors.” Review of Economics and Statistics 90(3): 414–427.
13. Cameron, A. Colin, Jonah B. Gelbach, and Douglas L. Miller (2011). “Robust Inference with Multi-way Clustering.” Journal of Business and Economic Statistics 29(2): 238–249.


Gelbach, Jonah B. and Jonathan Klick (2017). “Empirical Law and Economics”. In: Parisi, Francesco (ed). The Oxford Handbook of Law and Economics. Vol. 1: Methodology and Concepts. New York: Oxford University Press.


_____________
Explanation of symbols: Roman numerals indicate the source, arabic numerals indicate the page number. The corresponding books are indicated on the right hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2], and [Author]Vs[term], as well as “problem:”/“solution:”, “old:”/“new:”, and “thesis:”, are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.
Economic Theories
Parisi I
Francesco Parisi (Ed)
The Oxford Handbook of Law and Economics: Volume 1: Methodology and Concepts New York 2017


Ed. Martin Schulz, access date 2024-04-27